14 research outputs found

    Extending Construction X for Quantum Error-Correcting Codes

    In this paper we extend the work of Lisonek and Singh on Construction X for quantum error-correcting codes to finite fields of order $p^2$, where $p$ is prime. The results obtained are applied to the dual of Hermitian repeated-root cyclic codes to generate new quantum error-correcting codes.
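    For background, the classical (non-quantum) Construction X that the above builds on can be summarized as follows; this is the textbook statement, restated here for context rather than a result claimed in the abstract:
    \[
        C_2 \subseteq C_1 \ \text{linear codes with parameters } [n, k_1, d_1] \text{ and } [n, k_2, d_2], \quad C_a \text{ an } [a,\, k_1 - k_2,\, d_a] \text{ code} \ \Longrightarrow\ \text{an } [\,n + a,\ k_1,\ \ge \min(d_2,\ d_1 + d_a)\,] \text{ code exists},
    \]
    obtained by appending to each codeword of $C_1$ an encoding (under $C_a$) of its coset modulo $C_2$. The paper extends this idea to the Hermitian setting over $\mathbb{F}_{p^2}$ to produce new quantum codes.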

    Computational Limitations in Robust Classification and Win-Win Results

    We continue the study of statistical/computational tradeoffs in learning robust classifiers, following the recent work of Bubeck, Lee, Price and Razenshteyn, who showed examples of classification tasks where (a) an efficient robust classifier exists, *in the small-perturbation regime*; (b) a non-robust classifier can be learned efficiently; but (c) it is computationally hard to learn a robust classifier, assuming the hardness of factoring large numbers. Indeed, the question of whether a robust classifier for their task exists in the large-perturbation regime seems related to important open questions in computational number theory. In this work, we extend their work in three directions. First, we demonstrate classification tasks where computationally efficient robust classification is impossible, even when computationally unbounded robust classifiers exist. We rely on the hardness of decoding problems with preprocessing on codes and lattices. Second, we show hard-to-robustly-learn classification tasks *in the large-perturbation regime*. Namely, we show that even though an efficient classifier that is very robust (namely, tolerant to large perturbations) exists, it is computationally hard to learn any non-trivial robust classifier. Our first task relies on the existence of one-way functions, a minimal assumption in cryptography, and the second on the hardness of the learning parity with noise problem. In the latter setting, not only does a non-robust classifier exist, but also an efficient algorithm that generates fresh new labeled samples given access to polynomially many training examples (termed generation by Kearns et al. (1994)). Third, we show that any such counterexample implies the existence of cryptographic primitives such as one-way functions or even forms of public-key encryption. This leads us to a win-win scenario: either we can quickly learn an efficient robust classifier, or we can construct new instances of popular and useful cryptographic primitives.
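    For orientation, the robustness notion the abstract refers to is the standard one: a classifier f is robust to perturbations of size at most ε on a distribution D if (our paraphrase; the symbols ε, α, D, f are generic placeholders, not the paper's notation):
    \[
        \Pr_{(x, y) \sim D}\bigl[\exists\, \delta,\ \|\delta\| \le \varepsilon :\ f(x + \delta) \ne y\bigr] \le \alpha ,
    \]
    where the "small-perturbation" and "large-perturbation" regimes differ in how large the budget ε is allowed to be relative to the natural scale of the inputs.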

    Fine-Grained Cryptography

    Fine-grained cryptographic primitives are ones that are secure against adversaries with an a-priori bounded polynomial amount of resources (time, space or parallel-time), where the honest algorithms use fewer resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to come up with fine-grained primitives (in the setting of parallel-time-bounded adversaries) and to show unconditional security of these constructions when possible, or base security on widely believed separations of worst-case complexity classes. We show: 1. NC¹-cryptography: Under the assumption that NC¹ ≠ ⊕L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and, most importantly, public-key encryption schemes, all computable in NC¹ and secure against all NC¹ circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially, make non-black-box use of randomized encodings for logspace classes. 2. AC⁰-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and, perhaps most interestingly, collision-resistant hash functions, computable in AC⁰ and secure against all AC⁰ circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC⁰ and secure against AC⁰ circuits were known from the works of Håstad and Braverman. Funded by the United States Defense Advanced Research Projects Agency and the United States Army Research Office (Contract W911NF-15-C-0226).
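    As a concrete, classical illustration of the fine-grained template, the time-bounded example cited above (Merkle, CACM 1978) can be sketched in a few lines. This is a toy version of Merkle puzzles, not the paper's NC¹/AC⁰ constructions; the parameters N and M and the hash-based toy cipher are our own illustrative choices:

        # A minimal sketch of Merkle puzzles: a fine-grained primitive against
        # time-bounded adversaries. Illustrative toy only; not the paper's scheme.
        import os, hashlib, secrets

        N = 64          # number of puzzles Alice publishes
        M = 2 ** 12     # brute-force work needed to solve one puzzle

        def keystream(key: bytes, length: int) -> bytes:
            return hashlib.sha256(b"stream" + key).digest()[:length]

        def make_puzzle(index: int, session_key: bytes) -> bytes:
            puzzle_key = secrets.randbelow(M).to_bytes(2, "big")   # brute-forceable
            payload = index.to_bytes(2, "big") + session_key
            ct = bytes(a ^ b for a, b in zip(payload, keystream(puzzle_key, len(payload))))
            return hashlib.sha256(b"check" + puzzle_key).digest()[:8] + ct

        def solve_puzzle(puzzle: bytes):
            check, ct = puzzle[:8], puzzle[8:]
            for guess in range(M):                                  # O(M) honest work
                puzzle_key = guess.to_bytes(2, "big")
                if hashlib.sha256(b"check" + puzzle_key).digest()[:8] == check:
                    payload = bytes(a ^ b for a, b in zip(ct, keystream(puzzle_key, len(ct))))
                    return int.from_bytes(payload[:2], "big"), payload[2:]
            raise ValueError("no key found")

        # Alice publishes N puzzles and remembers the session keys.
        session_keys = [os.urandom(16) for _ in range(N)]
        puzzles = [make_puzzle(i, k) for i, k in enumerate(session_keys)]

        # Bob solves one random puzzle (time ~M) and announces only its index.
        index, shared = solve_puzzle(secrets.choice(puzzles))
        assert shared == session_keys[index]
        # An eavesdropper must solve ~N puzzles (time ~N*M) to learn the shared key.

    Honest parties spend roughly N + M steps, while an eavesdropper must solve on the order of N puzzles, i.e. about N·M steps; choosing N ≈ M gives the quadratic honest/adversary gap that typifies fine-grained security.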

    Structure vs Hardness through the Obfuscation Lens

    Much of modern cryptography, starting from public-key encryption and going beyond, is based on the hardness of structured (mostly algebraic) problems like factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often puts them in low (and so-called structured) complexity classes such as NP ∩ coNP or statistical zero-knowledge (SZK). Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes: they imply hard problems in NP ∩ coNP and SZK, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer. We show that the above primitives, and many others, do not imply hard problems in NP ∩ coNP or SZK via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO.
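    For context, the standard argument behind the claim that one-way permutations imply hard problems in NP ∩ coNP can be sketched as follows; this is a classical observation restated here, and the language $L_f$ and its bit-index formulation are our illustrative choices, not text from the paper:
    \[
        L_f = \{\, (y, i) \;:\; \text{the } i\text{-th bit of } f^{-1}(y) \text{ is } 1 \,\}, \qquad f \text{ a one-way permutation.}
    \]
    The unique preimage $x = f^{-1}(y)$ serves as a witness both for membership and for non-membership, so $L_f \in \mathrm{NP} \cap \mathrm{coNP}$; on the other hand, an efficient decider for $L_f$ would recover $f^{-1}(y)$ bit by bit and thus invert $f$, so $L_f$ cannot be easy if $f$ is one-way.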

    Cryptography from Information Loss

    Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions: 1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x₁, ..., xₘ), with each xᵢ ∈ {0, 1}ⁿ, and outputs a string z such that, with high probability, L′(z) = f(L(x₁), L(x₂), ..., L(xₘ)). Suppose a language L has an f-reduction C to L′ that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (i) f is the OR function, t ≤ m/100, and L′ is the same as L; (ii) f is the Majority function, and t ≤ m/100; (iii) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions. 2. Our second result is about the stronger notion of t-compressing f-reductions, i.e., reductions that output only t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses). Along the way, we define a non-standard, one-sided notion of average-case hardness, which is the notion of hardness used in the second result above and which may be of independent interest.
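    In display form, the two definitions at the heart of the abstract read as follows (a direct transcription of the text above into notation):
    \[
        C \text{ is } t\text{-lossy} \iff I\bigl(X;\, C(X)\bigr) \le t \ \text{ for every input distribution } X,
    \]
    \[
        C \text{ is an } f\text{-reduction from } L \text{ to } L' \iff L'\bigl(C(x_1, \ldots, x_m)\bigr) = f\bigl(L(x_1), \ldots, L(x_m)\bigr) \ \text{with high probability, for all } x_1, \ldots, x_m \in \{0,1\}^n .
    \]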

    On the Local Leakage Resilience of Linear Secret Sharing Schemes

    We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states. We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics. We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multiparty variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (Crypto 2016).
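    To make the leakage model concrete, here is a minimal sketch of additive secret sharing over a prime-order field together with a local leakage function. The specific choices (the Mersenne prime modulus, 20 parties, leaking the two low-order bits of each share) are hypothetical parameters for illustration, not taken from the paper:

        # Additive secret sharing over a prime field, plus a per-party ("local")
        # leakage function of bounded output length. Illustrative sketch only.
        import secrets

        P = 2**61 - 1          # a large prime modulus (any large prime works for the sketch)
        N_PARTIES = 20
        LEAK_BITS = 2          # bounded output length per party

        def share(secret: int, n: int = N_PARTIES) -> list[int]:
            """Additive sharing: shares are uniform subject to summing to the secret mod P."""
            shares = [secrets.randbelow(P) for _ in range(n - 1)]
            shares.append((secret - sum(shares)) % P)
            return shares

        def reconstruct(shares: list[int]) -> int:
            return sum(shares) % P

        def local_leakage(shares: list[int], bits: int = LEAK_BITS) -> list[int]:
            """Each party independently leaks `bits` bits of its own share; no joint leakage."""
            return [s & ((1 << bits) - 1) for s in shares]

        secret = 123456789
        shares = share(secret)
        assert reconstruct(shares) == secret
        leak = local_leakage(shares)
        # The paper's result (large prime field, enough parties): such per-party leakage
        # reveals essentially nothing about `secret`. Over small-characteristic fields
        # the analogue is clearly fatal: for additive sharing over GF(2)^k, the low bit
        # of the secret is simply the XOR of the parties' leaked low bits.
        print(leak)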

    Fine-grained Cryptography

    Fine-grained cryptographic primitives are ones that are secure against adversaries with a-priori bounded polynomial resources (time, space or parallel-time), where the honest algorithms use fewer resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to show unconditional security of these constructions when possible, or base security on widely believed separations of worst-case complexity classes. We show: NC¹-cryptography: Under the assumption that NC¹ ≠ ⊕L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and most importantly, public-key encryption schemes, all computable in NC¹ and secure against all NC¹ circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially, make non-black-box use of randomized encodings for logspace classes. AC⁰-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and perhaps most interestingly, collision-resistant hash functions, computable in AC⁰ and secure against all AC⁰ circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC⁰ and secure against AC⁰ circuits were known from the works of Håstad and Braverman.

    Structure vs. hardness through the obfuscation lens

    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 71-[77]). Cryptography relies on the computational hardness of structured problems. While one-way functions, the most basic cryptographic objects, do not seem to require much structure, as we advance up the ranks into public-key cryptography and beyond, we seem to require that certain structured problems are hard. For example, factoring, quadratic residuosity, discrete logarithms, and approximate shortest and closest vectors in lattices all have considerable algebraic structure. This structure, on the one hand, enables useful applications such as public-key and homomorphic encryption, but on the other, also puts their hardness in question. Their structure is exactly what puts them in low complexity classes such as SZK or NP ∩ coNP, and is in fact the reason behind (sub-exponential or quantum) algorithms for these problems. The question is whether such structure is inherent in different cryptographic primitives, making them inherently easier. We study the relationship between two structured complexity classes, statistical zero-knowledge (SZK) and NP ∩ coNP, and cryptography. To frame the question in a meaningful way, we rely on the language of black-box constructions and separations. Our results are the following: -- Cryptography vs. Structured Hardness: Our two main results show that there are no black-box constructions of hard problems in SZK or NP ∩ coNP starting from one of a wide variety of cryptographic primitives such as one-way and trapdoor functions, one-way and trapdoor permutations (in the case of SZK), public-key encryption, oblivious transfer, deniable encryption, functional encryption, and even indistinguishability obfuscation; -- Complexity-theoretic Implications: As a corollary of our result, we show a separation between SZK and NP ∩ coNP on the one hand, and the class PPAD, which captures the complexity of computing Nash equilibria, on the other; and -- Positive Results: We construct collision-resistant hashing from a strong form of SZK-hardness and indistinguishability obfuscation. It was previously known that indistinguishability obfuscation by itself does not imply collision-resistant hashing in a black-box way; we show that it does if one adds SZK-hardness as a "catalyst". Our black-box separations are derived using indistinguishability obfuscation as a "gateway", by first showing a (separation) result for indistinguishability obfuscation and then leveraging the fact that indistinguishability obfuscation can be used to construct the above variety of cryptographic primitives and hard PPAD instances in a black-box manner. by Akshay Degwekar. S.M.

    On foundations of public-key encryption and secret sharing

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 153-164). Since the inception of cryptography, information theory and coding theory have influenced it in myriad ways, including numerous information-theoretic notions of security in secret sharing, multiparty computation and statistical zero knowledge, and by providing a large toolbox used extensively in cryptography. This thesis addresses two questions in this realm: Leakage Resilience of Secret Sharing Schemes. We show that classical secret sharing schemes like Shamir secret sharing and additive secret sharing over prime-order fields are leakage resilient. Leakage resilience of secret sharing schemes is closely related to locally repairable codes, and our results can be viewed as impossibility results for local recovery over prime-order fields. As an application of the result, we show the leakage resilience of a variant of the Goldreich-Micali-Wigderson protocol. From Laconic Statistical Zero Knowledge Proofs to Public Key Encryption. Languages with statistical zero knowledge proofs that are also average-case hard have been used to construct various cryptographic primitives. We show that hard languages with laconic SZK proofs, that is, proof systems where the communication from the prover to the verifier is small, imply public-key encryption. by Akshay Dhananjai Degwekar. Ph.D.
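    For reference, the Shamir scheme whose leakage resilience is established above shares a secret $s \in \mathbb{F}_p$ ($p$ a large prime) among $n$ parties by sampling a uniformly random polynomial of degree at most $t-1$ with constant term $s$; this is the standard definition, restated here for convenience:
    \[
        q(X) = s + r_1 X + \cdots + r_{t-1} X^{t-1}, \qquad \text{party } i \text{ receives the share } q(i) \in \mathbb{F}_p ,
    \]
    and the local leakage question asks whether bounded-length functions applied independently to the shares $q(1), \ldots, q(n)$ can reveal information about $s$. The thesis, like the paper above, answers this in the negative for large prime $p$, high thresholds, and sufficiently many parties.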